
Deep neural networks in a mathematical framework

By: Caterini, Anthony L.
Contributor(s): Chang, Dong Eui.
Material type: Book
Series: SpringerBriefs in Computer Science
Publisher: Cham : Springer International Publishing, 2018
Description: xiii, 84 p. ; 23.5 cm
ISBN: 9783319753034
Subject(s): Computer science | Artificial intelligence | Pattern perception | Pattern recognition | Neural networks | Optical pattern recognition
DDC classification: 006.32

Summary: This SpringerBrief describes how to build a rigorous end-to-end mathematical framework for deep neural networks. The authors provide tools to represent and describe neural networks, casting previous results in the field in a more natural light. In particular, the authors derive gradient descent algorithms in a unified way for several neural network structures, including multilayer perceptrons, convolutional neural networks, deep autoencoders, and recurrent neural networks. Furthermore, the framework the authors developed is both more concise and more mathematically intuitive than previous representations of neural networks. This SpringerBrief is one step towards unlocking the black box of deep learning. The authors believe that this framework will help catalyze further discoveries regarding the mathematical properties of neural networks. This SpringerBrief is accessible not only to researchers, professionals, and students working and studying in the field of deep learning, but also to those outside of the neural network community.
Item type: Books
Call number: 006.32 CAT
Status: Available
Barcode: 031786

Includes bibliographical references.


